DR. AJAY KUMAR PATHAK
ASSISTANT PROFESSOR
READ ALL THE NOTES CHAPTER WISE
MINOR PAPER
SUBJECT NAME:- MN–2C (Th):- SOFTWARE TESTING
FOR B. Sc. IT.
SEM 6 F.Y.U.G.P.
UNIT 4 (UNIT NAME) :- PERFORMANCE AND SECURITY TESTING
Copyright © by Dr. Ajay kumar pathak
B. Sc. IT. SEMESTER 6 NOTES BASED ON NEP
Semester Examination and Distribution of Marks
Semester Internal Examination (SIE):- 25 Marks
End Semester Examination (ESE) :- 75 Marks
Objective: The objective of this course is to provide students with an understanding of software testing principles, techniques, and methodologies. The course aims to develop students' skills in designing test cases, executing tests, and reporting defects.
Course Outcome: By the end of this course, students should be able to:
· Understand the importance of software testing in the software development life cycle.
· Apply different testing techniques and methodologies.
· Design and execute test cases to verify software functionality.
· Identify and report software defects effectively.
· Understand the role of automated testing tools in software testing.
UNIT 4 :- PERFORMANCE AND SECURITY TESTING
WHAT IS TEST EXECUTION?:- Test Execution is the process of running test cases based on test scenarios created for a software application, to ensure that it meets all the pre-defined functional and non-functional requirements or specifications.
In this phase, the tests are categorized and executed according to a test plan. The test plan breaks the whole application into individual components and defines detailed test cases for each. Testing each component individually and comprehensively helps ensure that the application works seamlessly as a whole.
Test execution ensures the
functionality, performance, and usability of software before its release. This
also helps us provide data to ascertain if the test plan and test cases were
effective in allowing the product to meet its goals.
Types of test execution:- Test execution is much more than it seems on the surface. Teams rely on different types of test execution depending on the context, risks, and project maturity. Some types of test execution are:-
(1) Manual test case execution:- Manual test case execution means that testers follow
each step in the test plan themselves instead of using tools or scripts. It’s a
hands-on process that helps find unexpected behavior. The approach works well
for exploratory tests and complex scenarios that need human judgment.
A written manual test case
should outline what to test and how to perform the test. It should also specify
what results to look for. Although manual testing takes time, it provides the
flexibility that testers need to dig deeper when something doesn’t look right.
(2) Automated test case execution:- Automated test case execution relies on tools or
scripts to carry out tests without direct human involvement. Once the scripts
are ready, the process runs with minimal effort. Sometimes, all it takes is a
single command.
The method is ideal for
repetitive or large test cycles where speed is important. Automation is also
useful for regression testing, performance checks, and any area that benefits from
running the same test multiple times. It also helps teams save time and focus
human effort on more complex testing needs.
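As a minimal sketch of automated execution (the add() function and test names are hypothetical), an automated suite in pytest style runs every test with a single command such as `pytest`:

```python
# Sketch: a minimal automated check -- once the test functions are
# written, a runner discovers and executes them without manual steps.
# add() is a hypothetical unit under test, not from the notes.

def add(a, b):
    return a + b

def test_add_positive():
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-1, 1) == 0

# A test runner would invoke every test_* function automatically;
# here we call them directly so the script is self-contained.
test_add_positive()
test_add_negative()
```

Because the checks are scripted, the same suite can be re-run on every build, which is what makes automation suited to regression testing.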
(3) Semi-automated test case execution:- Semi-automated testing combines manual and automated
steps. Some parts of the process, such as data setup or validation, may still
require a tester’s input. Automation tools handle the remaining steps. It’s a
balanced option when teams want to speed up repetitive tasks but still need
human supervision for parts that require reasoning or detailed observation.
Stages of Test Execution:- The test execution process can be classified into 3 stages:-
(1) Test Planning
(2) Test Execution
(3) Test Evaluation
(1) Test Planning:- The Test Planning phase involves the following steps:
· Design the test plan as per the requirement specification document
· Define the test objectives
· Set up the test environment
· Identify the testing tools
· Identify a test reporting tool
(2) Test Execution:- After the test plan is ready, the test execution process involves creating, managing, and running the test cases. These test cases can be run manually or using automated frameworks. Test execution involves the following steps:
· Creating test cases
· Writing test scripts / test steps
· Running test cases
During the test run, the actual result is compared with the expected outcome for each test case. If they match, the test case passes; otherwise, it fails.
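The pass/fail decision described above can be sketched as a simple comparison (the test-case data here is hypothetical):

```python
# Sketch: mark each test case Pass or Fail by comparing the actual
# output with the expected outcome. Test data is illustrative only.

test_cases = [
    {"id": "TC001", "expected": "Login successful", "actual": "Login successful"},
    {"id": "TC002", "expected": "Error shown",      "actual": "Page crashed"},
]

for tc in test_cases:
    # A test case passes only when the actual result matches the expected one
    tc["status"] = "Pass" if tc["actual"] == tc["expected"] else "Fail"

results = {tc["id"]: tc["status"] for tc in test_cases}
```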
(3) Test Evaluation:- Test evaluation is the final step, which involves:-
· Analyzing the test run results
· Reporting new defects
· Monitoring the cases for which an existing bug was resolved
· Bug closure
· Test reporting
· Analyzing whether the exit criteria are being met
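One common exit criterion is a minimum pass rate across the executed suite. A minimal sketch (the threshold and counts are illustrative assumptions, not from the notes):

```python
# Sketch: decide whether testing can be declared complete based on a
# pass-rate exit criterion. The 95% threshold is a hypothetical example.

def exit_criteria_met(passed, failed, required_pass_rate=0.95):
    total = passed + failed
    if total == 0:
        return False          # nothing has been executed yet
    return passed / total >= required_pass_rate

# 190 of 200 cases passed -> a 95% pass rate meets the criterion
ready = exit_criteria_met(passed=190, failed=10)
```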
Advantages of Test Execution:-
(1) Detects Bugs Early :- Helps find defects before software release
(2) Improves Software Quality :- Ensures the product works correctly
(3) Validates Requirements :- Confirms the system meets user needs
(4) Builds User Confidence :- Reliable software increases trust
(5) Supports Decision Making :- Helps decide whether software is ready to launch
(6) Ensures System Stability :- Reduces chances of failure in real-world use
Disadvantages of Test Execution:-
(1) Time-Consuming :- Running many test cases takes time
(2) Costly Process :- Requires tools, testers, and infrastructure
(3) Not 100% Error-Free :- Some bugs may still remain undetected
(4) Requires Skilled Testers :- Needs knowledge and experience
(5) Environment Dependency :- Results may vary in different environments
(6) Maintenance Effort :- Test cases need updates as software changes
DEFECT REPORTING DURING SOFTWARE TESTING:-
What is a Defect?:- A defect in a software product, also known as a bug, error, or fault, makes the software produce a result that deviates from the software requirements. Examples include incorrect data, system hangs, unexpected errors, and missing or incorrect functionality.
WHAT IS A DEFECT REPORT?:- A Defect Report, also known as a Bug Report, is a document created during the software testing process to identify, describe, and track any issues or defects found in a software application.
It provides detailed information about the defect, including how it was discovered, the environment in which it occurred, and the steps to reproduce it. A defect report includes complete details about the defects, their sources, the actions needed to resolve them, and the expected result.
· Developers can check this defect report, understand the problem and its solution, and make improvements accordingly.
· QA (quality assurance) testers create a defect report to test and analyze each functionality and note it in the report.
· It requires a lot of effort and information to write a detailed defect report and deliver it to developers.
So the primary purpose of a good defect report is to identify the anomalies in the project and inform developers with a well-organized structure.
Why create Defect Reports?:- Defect reports are essential in software development and testing for several reasons:-
i. Clear Communication:- They provide a structured way to document issues, ensuring that developers understand the problem without ambiguity.
ii. Accountability:- They assign responsibility for fixing the defect, ensuring that it is addressed promptly.
iii. Tracking and Management:- Defect reports help in tracking the status of issues, prioritizing them, and managing the overall quality of the software.
iv. Documentation:- Defect reports serve as a historical record of issues found during testing, which can be useful for future reference or audits.
v. Software Quality:- Defect reports play a vital role in improving the overall quality and reliability of the software by systematically identifying and addressing defects.
Components of a Defect Report:- These are the primary elements:-
(1) Defect ID:- A unique identification number used to identify each defect in the report.
(2) Defect Description:- A detailed description of the defect, with its module and the source where it was found.
(3) Version:- Shows the version of the application in which the defect was found.
(4) Action Steps:- The step-by-step actions taken by an end user or QA tester that produce the defect.
(5) Expected Result:- QA (quality assurance) testers also need to identify and show the expected result if everything works without any defects, which lets developers know the end goal to achieve. It is a detailed step-by-step explanation of the expected result.
(6) Environment:- Sometimes the test environment plays a significant role in defects, such as running the application on different smartphones.
(7) Actual Result:- The result actually observed after performing the steps, as opposed to the expected result. Also add additional details such as defect sources and specific steps so developers can navigate and resolve the issue more effectively.
(8) Severity:- Describes the impact of the defect on the application. Each defect has a different severity level, and it is important to note it in detail.
Levels of Severity:-
· Low:- Minor bugs that can be resolved once and do not affect performance afterwards.
· Medium:- Minor defects that are easy to resolve and have limited impact.
· High:- Bugs that can affect the result of the application and need to be resolved.
· Critical:- Bugs that heavily affect performance and the end goal, such as crashing, system freezing, or restarting repeatedly.
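Because the four levels are ordered by impact, they can be sketched as an ordered enum so defects sort by severity (the defect titles below are hypothetical examples):

```python
# Sketch: the four severity levels as an ordered enum, so defects can
# be compared and sorted by impact. Defect data is illustrative only.

from enum import IntEnum

class Severity(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

defects = [
    ("Typo on about page",   Severity.LOW),
    ("App crashes on login", Severity.CRITICAL),
    ("Slow report export",   Severity.MEDIUM),
]

# Sort most severe first, so critical defects surface at the top
defects.sort(key=lambda d: d[1], reverse=True)
most_urgent = defects[0][0]
```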
How to write an Effective Defect Report?:- Here are the step-by-step instructions to write an effective defect report:-
Step 1. Create a Clear and Descriptive Title
· What to Include:- Summarize the defect in one line.
· Why:- A clear title helps quickly identify the nature of the issue. Avoid vague titles and be specific about the problem.
· Example:- “Login button unresponsive on iOS devices”
Step 2. Provide a Detailed Description
· What to Include:- Explain the defect with details on what was expected versus what actually occurred.
· Why:- A detailed description helps developers understand the defect fully and reproduce it accurately.
· Example:- “When attempting to log in, the login button does not respond after entering credentials, while it works correctly on Android devices.”
Step 3. List Steps to Reproduce the Defect
· What to Include:- Provide a step-by-step guide to replicate the issue.
· Why:- Clear reproduction steps help developers see the problem in action and verify the fix.
· Example:- (1) Open the app on an iOS device. (2) Enter valid credentials on the login screen. (3) Tap the login button. (4) Observe that the button does not respond.
Step 4. Specify the Environment
· What to Include:- Include details about the software version, operating system, hardware, and any relevant configurations.
· Why:- This helps developers reproduce the defect under the same conditions and understand its context.
· Example:- “iOS 16.4, iPhone 12, App version 3.2.1”
Step 5. Assign Severity (level) and Priority
· What to Include:- Indicate the defect’s impact (severity) and the urgency for fixing it (priority).
· Why:- Helps the development team prioritize the issue based on its impact on the application and its users.
· Example:- Severity:- Major (Login functionality is broken); Priority:- High (Needs immediate attention to prevent user frustration)
Step 6. Attach Supporting Documentation
· What to Include:- Add screenshots, screen recordings, or log files that illustrate the defect.
· Why:- Visual evidence can clarify the issue and assist in diagnosing the problem more effectively.
· Example:- Attach a screenshot of the unresponsive login button and a video showing the steps to reproduce the defect.
Step 7. Provide Additional Comments or Observations
· What to Include:- Include any extra details or observations that might help in understanding the defect or its impact.
· Why:- Additional context can assist developers in diagnosing the defect and finding a solution.
· Example:- “The issue seems to occur only when the device is in low battery mode.”
Step 8. Review and Revise
· What to Include:- Before submitting, review the report for completeness and clarity.
· Why:- Ensures that all necessary information is included and presented clearly, reducing the chances of miscommunication or delays.
· Example:- Check that all fields are filled, the description is clear, and the reproduction steps are accurate.
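The fields assembled across the steps above can be sketched as a single structure. A minimal sketch using a Python dataclass (the field values reuse the running example from the steps; the class itself is illustrative, not a standard API):

```python
# Sketch: a defect report as a dataclass holding the fields the steps
# above describe. The example values are the hypothetical iOS login bug.

from dataclasses import dataclass, field

@dataclass
class DefectReport:
    title: str
    description: str
    steps_to_reproduce: list
    environment: str
    severity: str
    priority: str
    attachments: list = field(default_factory=list)  # screenshots, logs, videos

report = DefectReport(
    title="Login button unresponsive on iOS devices",
    description="Login button does not respond after entering credentials.",
    steps_to_reproduce=[
        "Open the app on an iOS device",
        "Enter valid credentials on the login screen",
        "Tap the login button",
    ],
    environment="iOS 16.4, iPhone 12, App version 3.2.1",
    severity="Major",
    priority="High",
)
```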
TEST AUTOMATION AND TEST SCRIPTS IN SOFTWARE TESTING:-
TEST AUTOMATION IN SOFTWARE TESTING:- Software testing is performed by both automated and manual techniques. Automation testing is performed by running the tests automatically with the help of tools. It quickens the testing activities and makes software development faster. Suitable manual test cases are converted to automated tests and integrated with CI/CD (Continuous Integration and Continuous Delivery (or Continuous Deployment)) pipelines to make the development process quick, seamless, error-free, efficient, and more productive.
The test automation involves
the creation of scripts with the help of automation tools in order to verify
the software. It allows execution of repetitive, and redundant test steps
without the help of any manual intervention. The manual test cases which are
not easy to execute are generally chosen for automation. The automated test
scripts can be executed any time as they follow a specific sequence to validate
the software.
The automated test cases use test data, and comparisons are made between the expected and actual outcomes. Finally, a detailed test report is generated. The main objective of test automation is to reduce the number of manual test cases. The entire automated test suite can be re-run several times to validate the software. However, in spite of its advantages over manual testing, automation can never completely replace manual testing.
The test automation ultimately
speeds up the testing activities to a large extent. If integrated with the
CI/CD (Continuous Integration and Continuous Delivery (or Continuous
Deployment) ), then the automation gives even higher benefits during the
software development process as it performs continuous testing and easy
deployment.
Automation tools in software testing:- Selenium, UFT, Appium, Sikuli, Cypress, Playwright, Apache JMeter.
Automation Testing Process:-
(1) Test Tool Selection:- There are some criteria for selecting a tool. The main criteria include: Do we have skilled resources to allocate for automation tasks? Are there budget constraints? Does the tool satisfy our needs?
(2) Define Scope of Automation:- This includes a few basic points: the framework should support automation scripts, maintenance should be low, the return on investment should be high, and the test cases should not be overly complex.
(3) Planning, Design, and Development:- For this, we need to install the required frameworks or libraries, such as NUnit, JUnit, or QUnit, or the required software automation tools, and then start designing and developing the test cases.
(4) Test Execution:- The final execution of test cases takes place in this phase, and the tool depends on the language: for .NET we use NUnit, for Java we use JUnit, for JavaScript we use QUnit or Jasmine, etc.
(5) Maintenance:- Reports generated after the tests should be documented so they can be referred to in future iterations.
Types of Automation Testing:- The different automation testing types are listed below:-
(1) Unit testing:- Unit testing is a phase in
software testing to test the smallest piece of code known as a unit that can be
logically isolated from the code. It is carried out during the development of
the application.
(2) Integration
testing:- Integration testing is a
phase in software testing in which individual software components are combined
and tested as a group. It is carried out to check the compatibility of the
component with the specified functional requirements.
(3) Smoke testing:- Smoke testing is a type of
software testing that determines whether the built software is stable or not.
It is the preliminary check of the software before its release in the market.
(4) Performance testing:- Performance testing is a
type of software testing that is carried out to determine how the system
performs in terms of stability and responsiveness under a particular load.
(5) Regression testing:- Regression testing is a
type of software testing that confirms that previously developed software still
works fine after the change and that the change has not adversely affected
existing features.
(6) Security testing:- Security testing is a type
of software testing that uncovers the risks, and vulnerabilities (weaknesses)
in the security mechanism of the software application. It helps an organization
to identify the loopholes in the security mechanism and take corrective
measures to rectify the security gaps.
(7) Acceptance testing:- Acceptance testing is the
last phase of software testing that is performed after the system testing. It
helps to determine to what degree the application meets end users' approval.
(8) API testing:- API testing is a type of
software testing that validates the Application Programming Interface(API) and
checks the functionality, security, and reliability of the programming
interface.
(9) UI Testing:- UI (User Interface) testing is
a type of software testing that helps testers ensure that all the fields,
buttons, and other items on the screen function as desired.
Advantages of Test Automation:-
i. Test automation can be performed without any manual intervention and can be left unattended during execution; the test results are analyzed at the end of the run. This makes the execution process simple and efficient.
ii. Test automation improves the reliability of the software.
iii. Test automation increases the test coverage.
iv. Test automation reduces the chances of human error in testing.
v. Test automation saves a considerable amount of time, effort, and money. It has a huge return on investment.
vi. Test automation gives faster feedback on the software, resulting in the early detection of defects.
Disadvantages of Test Automation:-
i. Setting up test automation initially requires a lot of time, effort, and cost.
ii. One hundred percent test automation is not possible to achieve.
iii. It is not feasible to automate all kinds of scenarios and use cases.
iv. Test automation can only be performed by testers who are experienced and have the technical expertise and programming knowledge.
v. Test automation may give false positive or negative results if there are errors in the test scripts.
TEST SCRIPTS IN SOFTWARE TESTING:-
What is a Test Script?:- A test script is a set of instructions intended to test whether the functionality of the software works properly. The test script is part of automation testing: when the team executes an automated test, it is based on a specific test script. The script is written in a specific programming language such as Java, Python, and others. The tester writes the scripts, and an automation tool performs the entire test based on the script.
Since test scripts are part of automation testing, they deliver its key benefits. Executing automation testing with scripts significantly shortens the testing process time. Additionally, test scripts can be reused in the future.
Test Script in Automation Testing:- Sample Automated Script with Comments (Python, using Selenium WebDriver; the By locator class is imported from selenium.webdriver.common.by):

from selenium.webdriver.common.by import By  # Selenium locator helper

# Test Scenario: Verify login functionality
def test_login(driver):
    driver.get("https://example.com/login")                      # Open login page
    driver.find_element(By.ID, "username").send_keys("user1")    # Enter username
    driver.find_element(By.ID, "password").send_keys("pass123")  # Enter password
    driver.find_element(By.ID, "login-btn").click()              # Click login button
    assert "Dashboard" in driver.title                           # Validate dashboard loads
Example of a Test Script:- We have an e-commerce application where the user is logged in and has checked out the items in the shopping cart. He is now on the payment screen, where he has to enter his debit/credit card details. The script can be in any language of the tester’s choice, following the steps below.
Identify the text boxes, buttons, or any other element on the screen by its ID (for example, a CSS element ID), continuing with the test case ID TC001 as an example.
(1) Place the cursor on the first text box.
(2) Type the first four digits.
(3) Type the next four digits. The digits should automatically appear in the following text box. If this happens, the test status is ‘Pass’; if not, the test status is ‘Fail’.
(4) The text boxes should accept all 16 digits split into four text boxes.
(5) Check if there are four digits in each of the four text boxes. If yes, the test status is ‘Pass’; if not, the test status is ‘Fail’.
(6) Check if the 16-digit input is of number data type. If yes, the test status is ‘Pass’; if not, the test status is ‘Fail’.
For the above script, the test data can be numbers, alphabets, special characters, and a combination of all three. It ensures that the text box accepts only numbers and not any other data type. This script can be coded in any scripting language with test data and executed to test the application for these functionalities.
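The card-number checks described above can be sketched as a small validation function (the function name and data are illustrative, not part of any real application under test):

```python
# Sketch of the card-number checks above: the input must be 16 digits
# split across four text boxes of four digits each, digits only.

def validate_card_boxes(boxes):
    """Return 'Pass' if four boxes each hold exactly four digits."""
    if len(boxes) != 4:
        return "Fail"                     # must be split across four boxes
    for box in boxes:
        if len(box) != 4 or not box.isdigit():
            return "Fail"                 # reject letters, symbols, wrong length
    return "Pass"

ok  = validate_card_boxes(["1234", "5678", "9012", "3456"])
bad = validate_card_boxes(["12ab", "5678", "9012", "3456"])  # letters rejected
```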
Types of Test
Scripts:-
(1) Linear (Record & Playback)
(2) Modular scripts
(3) Data-driven scripts
(4) Keyword-driven scripts
(5) Hybrid scripts
(1) Linear Scripts (Record & Playback):- A Linear Script is the simplest type of test
script where actions are recorded and then replayed exactly in the same
sequence.
Example:- The tester opens the login page, enters username & password, and clicks login; the tool records all the steps and replays them.
(2) Modular Scripts:- In Modular Testing, the application is divided into
smaller modules, and separate scripts are written for each module. Simple
Meaning, Break system into parts → test each part separately
Example:- Login system modules:- Script 1: Open browser , Script 2: Enter credentials ,
Script 3: Click login
(3) Data-Driven Scripts:- A Data-Driven Script separates test data from the
script logic and runs the same test with multiple data sets. Simple Meaning, Same test → different inputs
Example:- Login test with
multiple users:-
| Username | Password |
| admin    | 1234     |
| User1    | abcd     |
| test     | 5678     |
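A minimal sketch of the data-driven idea: one login check iterated over the rows of the table above (check_login and the valid-user fixture are hypothetical stand-ins for the real login step):

```python
# Sketch: the same test logic runs once per row of the data table.
# VALID_USERS and check_login are illustrative stand-ins.

VALID_USERS = {"admin": "1234", "User1": "abcd"}  # hypothetical fixture

def check_login(username, password):
    return VALID_USERS.get(username) == password

# Test data kept separate from the test logic, as in the table above
test_data = [("admin", "1234"), ("User1", "abcd"), ("test", "5678")]

# One script, many data sets: iterate over the rows
results = [check_login(u, p) for u, p in test_data]
```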
(4) Keyword-Driven Scripts:- In this approach, actions are defined using keywords
(like Login, Click, Enter), and the script executes based on those keywords.
| Keyword       | Action         |
| OpenBrowser   | Launch browser |
| EnterUsername | Type username  |
| ClickLogin    | Click button   |
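A minimal keyword-driven executor can be sketched as a lookup table from keyword to action, with the test script reduced to a list of keywords (the actions here only record what ran; real ones would drive a browser):

```python
# Sketch: a tiny keyword-driven executor. Each keyword in the table
# above maps to an action function; the script is just keywords.
# Actions append to a log instead of driving a real browser.

log = []

actions = {
    "OpenBrowser":   lambda: log.append("browser launched"),
    "EnterUsername": lambda: log.append("username typed"),
    "ClickLogin":    lambda: log.append("login clicked"),
}

script = ["OpenBrowser", "EnterUsername", "ClickLogin"]

for keyword in script:
    actions[keyword]()   # look up and run the action for each keyword
```

Non-programmers can then write test scripts as keyword sequences, while the action implementations stay in one place.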
(5) Hybrid Scripts:- A Hybrid Script is a combination of two or more
scripting techniques (like Data-driven + Keyword-driven). Simple Meaning, Mix
of different methods for better results.
Example:- Use keywords for
actions, Use data-driven approach for
inputs
Advantages of Test Scripts:-
i. Help meet user and business requirements
ii. Validate the functionality of the software
iii. Allow fast and efficient automated testing
iv. Can be reused in the future
Disadvantages of Test Scripts:-
i. Complex to design
ii. Require expertise
DEFECT MANAGEMENT AND TRACKING:-
DEFECT MANAGEMENT IN SOFTWARE TESTING OR DEFECT MANAGEMENT PROCESS (DMP):-
A defect is any flaw or
deviation in a software application that causes it to function incorrectly or
differently from expected behavior.
Defect management in software
testing is the process of identifying, logging, prioritizing, tracking,
and resolving defects throughout the software development lifecycle. It gives
QA (quality assurance) and development teams a structured way to handle every
bug found during testing.
The process starts when a tester detects a defect.
· It is logged with details like steps to reproduce, severity (level), priority, and environment information.
· Once logged, the defect is assigned to the right developer for resolution.
· After the fix is applied, the tester verifies it and marks the defect as closed.
This structured approach
improves communication, reduces time to fix, and ensures no defect is left
untracked. It also builds a historical record of issues that can be used to
analyze trends and improve future releases.
Teams that adopt a proper
defect management process benefit from better defect tracking in QA, more
consistent releases, and stronger collaboration. This is why it is a key part
of every bug lifecycle in software testing and a foundation for higher
product quality.
Phases of Defect Management in Software Testing:-
(1) Defect Prevention:- During this phase, any errors discovered early in the testing process are addressed and corrected, since this is less costly and minimizes the impact. Waiting to correct them at a later stage can be expensive and detrimental to the app’s performance and overall quality.
This phase involves the following steps:
i. Identify risks:- In this step, you find the risks related to defects that could have a bad impact on the app if left unaddressed or pushed to a later stage.
ii. Estimate impact:- In this step, you calculate the impact or the cost associated with a critical risk found in the app.
iii. Reduce expected impact:- In this step, you work on minimizing the potential impact that the discovered risk can cause. You either try to eliminate the risk or reduce its occurrence and the associated impact.
(2) Deliverable Baseline:- The term ‘deliverable’ in the deliverable baseline,
i.e., the second phase of defect management, refers to the development, design,
or documentation being delivered. Once a
deliverable attains its predetermined milestone, you can consider it a
baseline. In this phase, the deliverables and existing defects move from one
milestone to the next.
(3) Defect Discovery:- Ridding an app of all its defects is not possible in one go. However, you can discover errors early in the process, before they impact the system and increase the overall cost.
This phase involves the following steps:-
i. Defect identification:- Identify the defect before it becomes a major issue.
ii. Report the defect:- Report the identified defect to the development team for a fix.
iii. Acknowledge:- The development team then validates the defect and proceeds to rectify it.
(4) Defect Resolution:- Now that bugs have been identified and relevant information has been recorded, informed decisions can be made about resolving each defect. Naturally, fixing errors early in the process helps save cost and effort, because errors tend to magnify as the software becomes more complex.
Here are the steps involved in this phase:-
i. Allocation:- Each bug is allocated to a developer.
ii. Resolution:- Test managers track the resolution process against the schedule.
iii. Reporting:- Test managers receive reports from developers about resolved bugs and update their status in the database by marking them Closed.
(5) Process Improvement:- Every discovered error impacts the overall app
quality. Therefore, in the process improvement phase, low-priority issues will
be addressed. All team members will investigate the root causes to enhance the
process effectively.
(6) Defect Management and Reporting:- Now that defects have been detected, categorized, and
resolved, step back and look at the big picture.
Defect analysis considers
inputs about singular defects, defect priorities, product issues, defect
resolution history, developers involved, and similar factors.
Advantages of Defect Management:-
i. With a systematic defect tracking process, you can identify and resolve defects early on and improve the app’s quality.
ii. Early identification and addressing of defects reduces the impact and the cost of fixing them at a later stage when the app becomes more sophisticated.
iii. Proper monitoring and prioritization of defects help in better resource allocation and project planning.
iv. Defect management speeds up the development process and reduces the disruptions caused by unidentified issues, thus reducing the time to market.
Disadvantages of Defect Management:-
i. If the defect management process is not managed correctly, the cost will increase drastically over time.
ii. Defect management can be time-consuming, which in turn slows down the development process.
iii. Once the defects are fixed, regression testing should be performed, which again consumes time and manual effort.
DEFECT TRACKING IN SOFTWARE TESTING:-
Defect tracking, also known as bug tracking, is the systematic process of identifying, recording, monitoring, and managing defects or issues in a product or system throughout its development lifecycle. These defects can span various aspects, including software bugs, hardware malfunctions, design flaws, or other imperfections that may hinder the product’s functionality, performance, or quality.
Tools for Defect Tracking:- There are numerous tools available for efficient defect tracking. Here are a few well-known ones:
(1) JIRA:- (Note:- JIRA has no full form.) A commonly used tool with strong project management and defect tracking features.
(2) Bugzilla:- A popular, feature-packed, and portable open-source defect tracking system.
(3) Redmine:- Defect tracking is one of the key benefits of this web-based project management application.
(4) MantisBT:- A flexible and user-friendly open-source defect tracking tool.
(5) HP ALM/Quality Center:- An all-inclusive test-management tool that effortlessly combines defect tracking with additional testing tasks.
Different Phases of Defect Tracking:- A defect proceeds through the following phases:-
(1) New:- The defect is identified and reported for the first time and hence is said to be in the “New” state.
(2) Assigned:- The defect is assigned to a specific developer or development team by a team lead or manager.
(3) Open:- The assigned developer begins work on the defect by moving it to the “Open” state.
(4) Fixed:- After the developer has coded the fix, the defect is marked as “Fixed.”
(5) Verified:- The fix is verified by the testing team, after which the defect is marked as “Verified.” If the fix fails verification, the defect may be reopened or reassigned to the developer.
(6) Closed:- After being verified successfully, the defect is marked as “Closed,” denoting that it has been resolved and that the code is release-ready.
(7) Deferred (Delayed):- The decision to defer means that the resolution of the defect will be put off to another release or update.
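The lifecycle above can be sketched as an allowed-transitions table, so invalid status changes are rejected (the exact transition set is an illustrative reading of the phases, not a standard specification):

```python
# Sketch: the defect lifecycle as a state machine. Each state maps to
# the set of states it may legally move to.

TRANSITIONS = {
    "New":      {"Assigned"},
    "Assigned": {"Open", "Deferred"},
    "Open":     {"Fixed", "Deferred"},
    "Fixed":    {"Verified"},
    "Verified": {"Closed", "Open"},   # reopen if verification fails
    "Closed":   set(),
    "Deferred": {"Assigned"},
}

def move(status, new_status):
    """Advance a defect's status, rejecting illegal transitions."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"cannot move from {status} to {new_status}")
    return new_status

# Walk a defect through the happy path: New -> ... -> Closed
s = "New"
for step in ["Assigned", "Open", "Fixed", "Verified", "Closed"]:
    s = move(s, step)
```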
How to Design a Defect Tracking System / Process:- The process generally consists of the following:-
(1) Defect Detection
(2) Defect Categorization and Assessment
(3) Defect Fixing
(4) Verifying Defect Resolution
(5) Defect Reports
(6) End of Project Defect Metrics
(1) Defect Detection:- Defect detection is the first step to defect
tracking. Defects can be found by developers, test engineers, and product
testers or during the beta testing phase. Sometimes defects are found following
release as well. Once a defect is found, log it using a defect tracking tool.
(2) Defect Categorization and Assessment:- A report including the specifics of the defects is
provided to the developer once they have been logged. The developer then determines
the severity and scale of impact represented by each defect. Following this
evaluation, the defects are segmented into categories such as critical, high
severity, medium severity, and low severity, helping developers prioritize and
fix defects according to this scale.
(3) Defect Fixing:- Defect resolution is performed by developers, and it is a multi-step process to evaluate and fix defects:-
i. Defects are first assigned to developers with relevant knowledge.
ii. Developers then perform a root cause analysis on the defects, based on their priority level. This helps them understand why the defect happened and how it can be addressed.
iii. The developer then solves the defect and writes a comprehensive defect report explaining the solution and resolution process.
iv. This report is once again added to the bug tracker and sent to the software testing team so they can carry out additional testing.
(4) Verifying Defect Resolution:- The software testing team will check the resolution
report and retest the solution to ensure that the defect is resolved. If
further defects have arisen downstream or the defect is not fixed, then the
team would once again share the defects on the defect tracker and share the
issues with the developers with relevant knowledge. On the other hand, if the
defect is truly resolved, then the defect status will be updated as closed / resolved
in the defect tracker.
(5) Defect Reports:- The software testing team creates reports for
defects. This report explains: -
i. Defects found in specific modules,
ii. Why they happened,
iii. How they were fixed,
iv. Analyzing the trends of the defects, and
v. How they were traced.
(6) End of Project Defect Metrics:- The development team learns through the process of
tracking defects. Metrics of the defect finding and fixing process are
calculated to evaluate performance, and data from the defect tracking process
is recorded for future reference and to prevent recurrences.
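As a sketch of the end-of-project metrics step, two commonly calculated figures are defect density (defects per thousand lines of code) and fix rate. The numbers below are made-up sample data, not from any real project.

```python
# Illustrative end-of-project defect metrics. The formulas are the commonly
# used ones; the figures are invented sample data.
defects_found = 48
defects_fixed = 45
lines_of_code = 12000  # size of the code base

defect_density = defects_found / (lines_of_code / 1000)  # defects per KLOC
fix_rate = defects_fixed / defects_found * 100           # percent of found defects fixed

print(f"Defect density: {defect_density:.1f} per KLOC")
print(f"Fix rate: {fix_rate:.2f}%")
```

Tracking these figures release over release shows whether the defect-finding and fixing process is improving.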
Defect Tracking Best Practices :-
(i) Use a defect tracking tool:- A dedicated tool automates the tracking process and improves accuracy.
(ii) Define clear defect reporting guidelines:- Clear defect
reporting guidelines will help to ensure that defect reports are complete and
accurate.
(iii) Establish a communication plan:- This will help to ensure that
stakeholders are kept informed of the status of defects.
(iv) Regularly review the process:- Regularly reviewing the process
will help identify and address any gaps or weaknesses.
(v) Train the users:- Training the users on how to use the defect
tracking system will help ensure the system is used effectively.
THE END UNIT 3 (TEST
EXECUTION AND DEFECT MANAGEMENT)
PERFORMANCE AND
SECURITY TESTING
WHAT IS PERFORMANCE
TESTING IN SOFTWARE TESTING:-
Performance testing is a
software testing method used to assess how well an application functions under
expected workloads, network conditions, and data volumes. Its primary objective
is to ensure the application is fast, stable, scalable, and responsive across environments.
For instance, before launching
a gaming app, performance testing helps verify that it loads quickly, renders
visuals correctly, handles multiplayer interactions smoothly, and runs
efficiently across devices.
Types of Performance
Testing:-
Performance testing focuses on
evaluating how well an application performs under various conditions.
(1) Load Testing
(2) Stress Testing
(3) Scalability
testing
(4) Endurance testing
(5) Spike testing
(6) Volume testing
(1) Load Testing:- Load testing measures
how your application performs under a specific number of users or transactions
also called load conditions. This test ensures the system can handle expected
traffic while maintaining an optimal user experience. It simulates expected user traffic to determine how the system handles normal usage.
When to use load
testing:- Before launching a new
application or feature, To benchmark
performance during the development process.
Example:- An online retail store anticipates heavy traffic
during Black Friday. Running performance test scenarios with 1,000 simultaneous
users browsing and making purchases ensures the site can handle peak loads.
Formula example:
- Throughput
= Total Transactions / Total Time
Metrics to monitor:-
(i) Response time:- Time taken
to load a page or process a transaction.
(ii) Throughput:- Number of
transactions processed per second.
(iii) Error rate:- Percentage of failed transactions.
(iv) Number of virtual users:-
The number of simulated users accessing the application.
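The metrics above can be computed from a batch of recorded transaction results. The sketch below uses invented timings and treats the transactions as sequential, so throughput reduces to the Throughput = Total Transactions / Total Time formula given above; real load-testing tools measure concurrent runs.

```python
# Load-test metrics from recorded transactions. Timings are simulated;
# a real run would collect them from the tool's results log.
transactions = [
    # (response_time_seconds, succeeded)
    (0.21, True), (0.35, True), (0.18, True), (0.90, False), (0.27, True),
]

total_time = sum(t for t, _ in transactions)
throughput = len(transactions) / total_time                       # transactions per second
error_rate = sum(1 for _, ok in transactions if not ok) / len(transactions) * 100
avg_response = total_time / len(transactions)

print(f"Throughput:   {throughput:.2f} tps")
print(f"Error rate:   {error_rate:.0f}%")
print(f"Avg response: {avg_response:.3f} s")
```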
(2) Stress Testing:- Stress testing evaluates the software’s performance
under extreme load conditions. This helps identify the breaking point and
ensures the system can recover gracefully from stressful situations.
When to use stress testing:- To prepare for unexpected traffic surges, To identify system limits and performance bottlenecks.
Example:- E-commerce website: Push an e-commerce website to handle 10,000 simultaneous users to find its breaking point.
Metrics to monitor:-
(i) System stability: Ability to
remain operational under stress.
(ii) Recovery time: Time taken
to recover from failure.
(iii) Error handling:
Effectiveness in managing errors under stress.
(3) Scalability testing:- Scalability testing determines how your system adapts
to an increasing number of users or transactions over time, making it critical
for applications with growth potential.
When to use
scalability testing:- When anticipating a
growing user base or data volume, After significant architectural updates to
the system.
Example:- A cloud-based service tests its ability to scale from
100 to 10,000 users without affecting latency or load time benchmarks.
Metrics to monitor:-
(i) Resource utilization:- CPU, memory, and disk usage.
(ii) Response times:- Performance consistency as load increases.
(iii) Scalability factor:- Ratio of increased performance to increased load.
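The scalability factor metric listed above can be computed directly from two measurements. The figures below are illustrative only: a factor near 1.0 indicates near-linear scaling, while a value well below 1.0 means performance degrades as load grows.

```python
# Scalability factor = (performance increase) / (load increase).
# Sample figures are invented for illustration.
baseline_users, baseline_throughput = 100, 250    # 250 requests/second at 100 users
scaled_users, scaled_throughput = 1000, 1800      # 1800 requests/second at 1000 users

scalability_factor = (scaled_throughput / baseline_throughput) / (scaled_users / baseline_users)
print(f"Scalability factor: {scalability_factor:.2f}")
```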
(4) Endurance testing:- Endurance testing, also known as soak testing, checks
the application’s performance over an extended period to identify memory leaks
and performance degradation.
When to use endurance testing:- Before long-term deployment, To ensure stability under sustained load.
Example:- Financial application: Run a financial application continuously for a
month to check for memory leaks or performance degradation.
Metrics to monitor:-
(i) Memory usage: Track for potential leaks.
(ii) Response times: Identify performance degradation over time.
(iii) System health: Overall stability during the test
period.
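The memory-leak check that endurance testing performs can be illustrated with Python's built-in tracemalloc module. The "leak" below is deliberate for the example; a real soak test would watch the application's memory over hours or days rather than a tight loop.

```python
# Sketch of an endurance-style memory check: compare traced memory before
# and after a sustained workload. The leak here is intentional.
import tracemalloc

leaky_cache = []  # simulated leak: grows on every request, never evicted

def handle_request(payload):
    leaky_cache.append(payload * 100)  # forgets to clear old entries

tracemalloc.start()
before, _ = tracemalloc.get_traced_memory()

for i in range(10_000):               # sustained load over many iterations
    handle_request(f"req-{i}")

after, _ = tracemalloc.get_traced_memory()
tracemalloc.stop()

growth_kb = (after - before) / 1024
print(f"Memory growth after 10,000 requests: {growth_kb:.0f} KiB")
```

Steadily growing memory under a constant workload is the classic signature of the leaks endurance testing is designed to catch.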
(5) Spike testing:- Spike testing evaluates an application’s performance
by simulating sudden and extreme increases in traffic over a short period.
Unlike stress testing, spike testing focuses on how the system handles and
recovers from sharp traffic surges.
When to use spike
testing:- Before promotional events,
product launches, or flash sales, To prepare for scenarios with sudden traffic
spikes, such as viral social media campaigns.
Example:- E-commerce platform:- Simulate a 10x traffic surge during a Black Friday
promotion to ensure the system can handle sudden spikes in user activity.
Metrics to monitor:-
(i) Response time:- Measure how quickly the system responds
during and after the traffic surge.
(ii) Error rate:- Track the percentage of failed requests
during the spike.
(iii) Recovery time:- Evaluate how quickly the system
stabilizes after the spike subsides.
(iv) System stability:- Monitor the system’s ability to remain
operational under extreme conditions.
(v) Resource utilization:- Assess CPU, memory, and disk usage
during the spike to identify resource constraints.
(6) Volume testing:- Volume testing focuses on assessing how an application
performs when processing large amounts of data rather than a high number of
users. This test helps identify issues like data overflow or performance
degradation.
When to use volume testing:- After implementing new data-heavy features, When scaling your system to handle increased data loads.
Example:- Database testing: Test a database’s ability to handle importing
millions of records to ensure no significant degradation in performance.
Metrics to monitor:-
(i) Data throughput:- Volume of data processed per second,
ensuring bulk operations remain efficient.
(ii) Query execution time:- Time taken for database queries to
complete under high data loads.
(iii) Disk I/O:- Rate of data read/write operations; unusually high usage can indicate a bottleneck.
(iv) Memory usage:- Track for excessive consumption or leaks
during large data operations.
(v) Error rate:- Percentage of failed data operations,
ensuring data reliability and integrity.
(vi) Database indexing efficiency:- Performance of queries on indexed fields, preventing slowdowns as data grows.
Performance Testing
Tools:-
(i) Apache JMeter:- Open-source tool for load testing web apps and
APIs.
(ii) Gatling:- Scalable,
developer-friendly tool using asynchronous I/O for web app testing.
(iii) LoadRunner:-
Enterprise-grade load testing for various application types.
(iv) BlazeMeter:- Cloud-based
platform supporting JMeter, Gatling, and Selenium.
(v) Locust:- Python-based tool for user load simulation on
websites.
(vi) K6:- Scriptable load
testing focused on APIs and modern web apps.
(vii) Apache Bench:- Simple,
command-line benchmarking tool for HTTP servers.
(viii) NeoLoad:- Advanced
enterprise tool for load testing complex systems.
(ix) Tsung:- Distributed tool
for stress testing web and protocol-based systems.
(x) WebLOAD:- Enterprise
solution with support for complex load scenarios.
Performance Testing
Process:-
(1) Set Up the Right Test Environment:- Use a test setup that mirrors your production
environment as closely as possible. For accurate results, test on real devices
and browsers using a real device cloud like BrowserStack Automate. It enables
testing across 3500+ device-browser-OS combinations, simulating real user
conditions like low network, battery levels, or location changes.
(2) Define Performance Benchmarks:- Establish clear success criteria like response time,
throughput, resource usage, and error rates. Use project requirements to define
measurable goals and thresholds.
(3) Design Test Scenarios:- Create test cases that reflect real user behavior.
Include varied usage patterns and peak load conditions. Automate where possible
to speed up execution and minimize human error.
(4) Prepare Tools & Test Environment:- Configure all necessary tools, integrations, and test
data. Ensure version control and environment variables are properly set up for
consistency.
(5) Run Tests:- Execute test suites under controlled conditions. Use
parallel testing to reduce execution time while maintaining accuracy.
(6) Analyze, Debug & Re-Test:- Review key metrics, identify bottlenecks, and log
issues. Once fixes are made, re-run the tests to validate improvements and
ensure the system is ready for production.
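Step (2) of the process, defining benchmarks, can be sketched as a simple threshold check against a test run's measured metrics. The threshold values below are example figures, not recommendations.

```python
# Performance benchmarks expressed as thresholds, plus a check that
# reports which ones a measured test run violates.
BENCHMARKS = {
    "response_time_ms": 500,   # must be at or below
    "error_rate_pct": 1.0,     # must be at or below
    "throughput_tps": 100,     # must be at or above
}

def evaluate(measured):
    """Return the list of benchmark names the measured run fails."""
    failures = []
    if measured["response_time_ms"] > BENCHMARKS["response_time_ms"]:
        failures.append("response time")
    if measured["error_rate_pct"] > BENCHMARKS["error_rate_pct"]:
        failures.append("error rate")
    if measured["throughput_tps"] < BENCHMARKS["throughput_tps"]:
        failures.append("throughput")
    return failures

run = {"response_time_ms": 620, "error_rate_pct": 0.4, "throughput_tps": 130}
print(evaluate(run))  # this run exceeds only the response-time threshold
```

Encoding the pass/fail criteria up front makes step (6), analyze and re-test, a mechanical comparison rather than a judgment call.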
Advantages of
Performance Testing:-
i. Identifies performance bottlenecks such as slow database queries, memory leaks, and network issues
ii. Improves scalability by determining how the system
performs as user load increases
iii. Enhances reliability and stability under normal and
peak workloads
iv. Reduces production risks by detecting performance
issues early
v. Cost-effective compared to fixing performance problems
after deployment
vi. Improves user experience by ensuring fast and
responsive application behavior
vii. Supports future growth by preparing the system for
traffic spikes
viii. Helps meet industry and compliance standards
ix. Provides deeper system insight by revealing behavior
under different load conditions
Disadvantages of
Performance Testing:-
i. Resource-intensive, requiring significant hardware, tools, and infrastructure
ii. Complex to design and execute, requiring skilled
professionals
iii. Limited coverage, as it focuses mainly on performance
issues
iv. Results may be inaccurate if the test environment
differs from production
v. Difficult to simulate real-world user behavior
accurately
vi. Time-consuming analysis due to large volumes of test
data
vii. High dependency on expertise for scripting,
monitoring, and debugging
viii. Cannot guarantee zero performance issues in real production environments
SECURITY TESTING IN
SOFTWARE TESTING:-
Security testing in software
testing is the process of evaluating your software to identify vulnerabilities
or weaknesses that could be exploited by hackers or attackers. The importance
of security testing cannot be overstated, as it helps to ensure that software
is secure and can protect sensitive data and information from unauthorized
access or misuse.
One crucial benefit of security
testing is that it protects against cyber-attacks which are becoming
increasingly sophisticated. Security testing can help identify potential
security threats in your software, allowing you to take steps to address them
before they cause major problems.
Also, security testing helps
you stay compliant with regulations. Depending on your industry and region,
there may be specific regulations and standards that your software must meet.
Security testing can help ensure that the software meets these requirements,
avoiding potential penalties or legal issues.
Types of Security
Testing:-
(1) Vulnerability Scanning:- Vulnerability scanning is the process of scanning software for known vulnerabilities or weaknesses. This type of testing
involves using automated testing tools to identify potential security flaws in
your software. Examples of such flaws include outdated software components,
weak passwords, or insecure network configurations.
Vulnerability scanning can help
identify security weaknesses that may be present in your software before they
can be exploited by attackers.
A vulnerability scan looks for missing security patches, weak passwords, and malware in the system, and reports the potential exposures it finds.
This type of scanning is automated and can be scheduled weekly, monthly, or quarterly, depending on the organization's needs.
(2) Penetration Testing :- Penetration testing, also known as “pen testing,”
involves simulating a real-world attack on your software to identify
vulnerabilities and weaknesses. This type of testing typically involves ethical
hackers or security professionals attempting to exploit security weaknesses in
your software.
The purpose of a penetration
test is not just to see whether or not specific vulnerabilities exist within a
system but also to determine the level of risk posed by these vulnerabilities.
Therefore, a penetration test performed by security professionals should reveal
all the potential risks and offer mitigation strategies against such threats.
Types of Penetration
Testing:-
(i) Network Penetration Testing- This type focuses on testing the security of network
infrastructure such as servers, routers, switches, and firewalls.
Example:- A tester tries to
break into a company’s internal network through weak Wi-Fi security.
(ii) Web Application Penetration Testing:- This type tests websites and web applications for
vulnerabilities.
Example:- Testing a login page
to see if attackers can access accounts without valid credentials.
(iii) Mobile Application Penetration Testing:- Focuses on testing Android and iOS apps for security
issues.
Example:- Checking if sensitive
user data is stored insecurely in a mobile app.
(iv) Social Engineering Penetration Testing:- Targets human behavior instead of systems.
Example:- Sending fake emails
to employees to trick them into revealing passwords.
(v) API Penetration Testing:- Tests Application Programming Interfaces (APIs) used
by apps.
Example:- Manipulating API
requests to access other users’ data.
(vi) Black Box, White Box, and Gray Box Testing:-
·
Black Box Testing:- No prior knowledge of the system; simulates an external hacker.
·
White Box Testing:- Full access to system details; deep security analysis.
·
Gray Box Testing:- Partial knowledge; a balanced approach.
(3) Risk Assessment:- Risk assessment involves identifying potential threats
to your software and assessing the likelihood and negative impacts of those
threats. This type of testing typically involves analyzing the software’s
architecture, design, and implementation to identify potential security risks —
for example, data breaches, denial-of-service (DoS) attacks, or malware and
viruses.
(4) Security Scanning:- Security scanning involves using automated tools to
scan software for potential security vulnerabilities. These tools may include
software or hardware-based scanners that can detect a wide range of security
issues.
Security scanning may include
tests for common vulnerabilities such as SQL injection, cross-site scripting
(XSS), and buffer overflow attacks.
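The SQL injection vulnerability mentioned above can be demonstrated in a few lines with Python's built-in sqlite3 module; the table and values are invented for the example. Security scanners flag the first, string-concatenated query style; the parameterized form is the fix.

```python
# SQL injection demo: string-built query vs. parameterized query.
# The users table and credentials are made up for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "' OR '1'='1"

# VULNERABLE: user input is pasted straight into the SQL string,
# turning the WHERE clause into a condition that is always true.
query = f"SELECT * FROM users WHERE name = '{attacker_input}'"
rows_vulnerable = db.execute(query).fetchall()   # returns every row

# SAFE: a parameterized query treats the input as data, not SQL.
rows_safe = db.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)
).fetchall()                                     # matches no user

print(len(rows_vulnerable), len(rows_safe))
```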
(5) Posture Assessment:- Posture assessment involves analyzing the overall
security posture of your software. This type of testing typically involves
reviewing the software’s security policies and procedures to identify
loopholes.
During posture assessment,
security experts may review your access controls and software endpoints to help
you prevent malicious attacks that may be targeted at your software.
(6) Security Auditing:- Security auditing involves a comprehensive evaluation
of the software’s design, implementation, and operational processes to identify
gaps in your security controls. When running security audits, start by defining
a scope and objective that outlines the purpose, goals, and expected outcomes
of your audit.
Types of Security
Testing Tools:-
(i) SAST (Static Application
Security Testing):- Analyzes the source code to identify security flaws without
executing the program. It helps developers identify and fix vulnerabilities
early in the development process.
(ii) DAST (Dynamic Application
Security Testing):- Tests running
applications to identify security vulnerabilities. It simulates real-world
attacks like SQL injection or cross-site scripting (XSS) and is typically used
for web applications.
(iii) IAST (Interactive
Application Security Testing):- Combines both static and dynamic testing to
provide real-time feedback during the application’s runtime. It offers deeper
insights into the security of the application by continuously monitoring code
flow.
(iv) SCA (Software Composition
Analysis):- Scans third-party libraries and dependencies used in the
application for known vulnerabilities, license issues, and outdated components.
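As a toy illustration of the SAST idea, analysing source text without executing it, the sketch below scans a code snippet for hardcoded credentials with a regular expression. Real SAST tools parse code properly; this regex is only a simplified stand-in, and the sample source is invented.

```python
# Toy SAST check: flag lines that assign a string literal to a
# credential-like variable name, without running the scanned code.
import re

SOURCE = '''
db_host = "db.example.com"
password = "hunter2"        # hardcoded secret
api_key = "sk-12345"
timeout = 30
'''

PATTERN = re.compile(r'^\s*(password|api_key|secret)\s*=\s*["\']', re.MULTILINE)

findings = [m.group(1) for m in PATTERN.finditer(SOURCE)]
print("Hardcoded credentials found:", findings)
```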
Advantages of
Security Testing:-
i. Identifying vulnerabilities:- Security testing helps identify vulnerabilities in the system
that could be exploited by attackers, such as weak passwords, unpatched
software, and misconfigured systems.
ii. Improving system security:- Security testing helps
improve the overall security of the system by identifying and fixing
vulnerabilities and potential threats.
iii. Reducing risk:- By identifying and fixing
vulnerabilities and potential threats before the system is deployed to
production, security testing helps reduce the risk of a security incident
occurring in a production environment.
iv. Improving incident response:- Security testing helps
organizations understand the potential risks and vulnerabilities that they
face, enabling them to prepare for and respond to potential security incidents.
Disadvantages of
Security Testing
i. Resource-intensive:- Security testing can be resource-intensive, requiring significant hardware and
software resources to simulate different types of attacks.
ii. Complexity:- Security testing can be complex,
requiring specialized knowledge and expertise to set up and execute
effectively.
iii. Limited testing scope:- Security testing may not be
able to identify all types of vulnerabilities and threats.
iv. False positives and negatives:- Security testing may
produce false positives or false negatives, which can lead to confusion and
wasted effort.
v. Time-consuming:- Security testing can be
time-consuming, especially if the system is large and complex.
vi. Difficulty in simulating real-world attacks:- It's
difficult to simulate real-world attacks, and it's hard to predict how
attackers will interact with the system.
USABILITY TESTING AND USER EXPERIENCE EVALUATION:-
USABILITY TESTING IN SOFTWARE
TESTING:- Usability testing is all about
the usage of the software by the actual users to determine their experiences
while using it, and to find any errors with respect to its design, and
functionalities. This type of testing cannot be done by complete automation.
The suggestions obtained from the end users at the time of usability testing
are incorporated into the software. As a result, it ensures that the software
is successful in satisfying the needs and expectations of the customers.
Only a few parts of usability testing, namely gathering data and some evaluations, may be automated; however, the actual testing needs to be performed with human intervention, as it involves manual observation to collect feedback and overall user experiences.
But after execution of a few
rounds of usability tests, some of them may be moved to automation which
ultimately saves time and effort. In short, the usability testing measures how easily
a software can be used by the customers. It identifies bottlenecks in the
graphical user interfaces, features, and overall customer involvement with the
software.
Types of Software
Usability Testing:-
(1) Explorative Usability Testing:- It is done to analyze the usability of the software
and find flaws when the end users start using it. It gathers data on how the
users respond while working with the software. It gathers customer feedback, which is incorporated into the software to improve its designs and features.
It ensures that the developed software is intuitive and very easy to use.
(2) Comparative Usability Testing:- It is done to analyze the user involvement by
comparing the usability parameters of various software or different versions of
the same software. It determines how one software exceeds the others and
accordingly prioritizes the improvements. It evaluates the user choices and
preferences which help in taking decisions on the design changes and new
features.
(3) Assessment Usability Testing:- It is done to analyze how easily the users are
interacting with the software, and what factors can be improved. It measures
the activity completion rate, correctness, and duration. The outcomes of these
tests are used to make the user experience more optimal.
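The assessment-testing measures mentioned above, completion rate and duration, can be computed from session records like the following; the data is made up for illustration.

```python
# Usability assessment metrics from observed test sessions.
# Session records are invented sample data.
sessions = [
    # (task_completed, duration_seconds, errors_made)
    (True, 42, 0), (True, 55, 1), (False, 120, 4), (True, 38, 0), (True, 61, 2),
]

completed = [s for s in sessions if s[0]]
completion_rate = len(completed) / len(sessions) * 100
avg_duration = sum(s[1] for s in completed) / len(completed)

print(f"Task completion rate: {completion_rate:.0f}%")
print(f"Avg time on task (completed runs): {avg_duration:.0f} s")
```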
(4) Validation Usability Testing:- It is done to analyze any bottlenecks that impact the
end user experiences. It accurately identifies the target users, their
requirements, and use cases.
Various Phases of
Software Usability Testing:-
(1) Define Goals and Scope:- Define research objectives, target audience, and the scope of the testing. This includes identifying the specific usability aspects you want to evaluate and the type of testing (qualitative, quantitative, etc.) that best suits your goals.
(2) Recruit Participants:- Find participants that represent the target user
base. Consider demographics, behavior patterns, and any other relevant factors
to ensure the feedback accurately reflects your actual users
(3) Preparation:- Develop the testing materials, including tasks for
users to complete, a script for moderators (if applicable), and any data
collection tools you’ll be using, as well as set up the testing environment,
whether physical or digital, to ensure a smooth testing experience
(4) Conduct the Usability Testing:- Conduct the actual usability testing sessions. This
can involve moderated sessions with a facilitator guiding users and asking
questions or unmoderated sessions where users complete tasks independently
(5) Data Analysis:- Analyze the data collected during testing. This might
involve observations, recordings, surveys, and task completion metrics. Look
for patterns, identify usability issues, and categorize user feedback
(6) Reporting:- Document your findings and recommendations in a clear
and concise report. Present the identified usability problems, user quotes, and
data visualizations to communicate insights to stakeholders effectively
(7) Implementation:- Take action on the identified issues. This involves
prioritizing recommendations, making design changes, and potentially conducting
follow-up testing to ensure the implemented solutions effectively address the
usability problems.
Advantages of
Software Usability Testing:-
i. Usability testing identifies defects from the early stages of the SDLC, which saves cost and time.
ii. Usability testing improves the customer experience, thereby increasing conversion rates.
iii. The usability testing reduces the overall support
cost.
Disadvantages of
Software Usability Testing:-
i. Usability testing is expensive and time-consuming if it involves a large number of users.
ii. It is not easy to identify the target audiences.
iii. The usability testing does not cover the edge cases
and complex scenarios.
USER EXPERIENCE EVALUATION
IN SOFTWARE TESTING:-
User experience refers to the
perception and reactions of users when interacting with a digital product or
service. A good UX not only implies attractive designs, but also functional,
intuitive systems focused on the user’s real needs.
User Experience (UX) testing is
the process of evaluating how real users interact with a product, website, or
application. It focuses on identifying usability issues and improving the
overall experience. During UX testing,
users are observed as they perform tasks, providing valuable insights into how
they navigate through the product. This testing can include a variety of
methods such as usability tests, surveys, and user interviews.
The primary goal of UX testing
is to understand user behavior, by pinpointing pain points and measuring how
well the product meets user needs.
Types of UX Testing:-
(1) Usability Testing:- The main goal here is to identify usability issues that could prevent users from completing tasks easily and efficiently. These issues could include confusing navigation, unclear instructions, or poor functionality. Usability testing can be conducted in various ways. In moderated usability testing, a facilitator guides the user through tasks.
(2) User Interviews:- User interviews provide valuable qualitative data by
directly engaging with users. During these one-on-one sessions, users are asked
about their experiences, motivations, and challenges with a product. The
interviewer asks open-ended questions, allowing users to share their thoughts
in their own words. User interviews are flexible and can be conducted at any
stage of the product life cycle. They help businesses understand the “why”
behind user actions, providing insights into user needs and preferences.
(3) Surveys and Questionnaires:- Surveys and questionnaires are effective tools for
gathering quantitative data from a larger user base. Surveys provide valuable
insights into user preferences, behaviors, and attitudes. They are often used
after product launch to measure user satisfaction or to test specific features.
The data collected from surveys can be analyzed statistically to identify trends and patterns. While surveys are great for collecting numerical data, they should be used in conjunction with other testing methods like usability testing or user interviews.
(4) A/B Testing:- ("A/B" stands for Version A / Version B.) A/B testing involves comparing two versions of a product to determine which one performs better in terms of user engagement, conversion rates, or other key metrics.
Users are randomly assigned to
one of two versions—Version A or Version B—and their behavior is monitored. This
type of testing is particularly useful when businesses need to make decisions
between two design options.
For example, an A/B test might
compare two website layouts or two call-to-action buttons to see which one
leads to higher click-through rates.
A/B testing helps businesses
make data-driven decisions, improving user engagement and satisfaction. A/B
testing is often used in conjunction with other UX testing methods to validate
hypotheses and optimize product performance.
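A minimal A/B comparison boils down to the conversion rate of each version, as described above. The counts below are invented sample data; a real analysis would also test statistical significance, which is omitted here.

```python
# A/B test comparison: conversion rate of each version.
# Visitor and conversion counts are made-up sample data.
version_a = {"visitors": 5000, "conversions": 400}
version_b = {"visitors": 5000, "conversions": 465}

rate_a = version_a["conversions"] / version_a["visitors"] * 100
rate_b = version_b["conversions"] / version_b["visitors"] * 100

winner = "B" if rate_b > rate_a else "A"
print(f"A: {rate_a:.1f}%  B: {rate_b:.1f}%  -> Version {winner}")
```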
(5) Diary Studies and Long-Term Feedback:- Diary studies involve asking users to document their
experiences with a product over an extended period of time.
Users may record their
interactions, thoughts, and feelings in a physical or digital diary. This
method is useful for gathering long-term feedback, especially when it comes to
products that are used regularly or over time. Diary studies provide insights
into how user behavior evolves and how products are integrated into their daily
lives.
They also capture emotional responses and usage patterns that might not be evident in short-term testing. Long-term feedback is invaluable for understanding how users adapt to and engage with a product in the long run.
Steps in the UX
Evaluation Process:-
Step 1:- Define Goals and Objectives:- Begin by
identifying the purpose of the usability evaluation. Determine what specific
tasks or design elements need to be evaluated.
Step 2:-
Identify Target Users:- Understand the target audience and select
real users who match the intended demographics for usability testing.
Step 3:- Choose Usability Evaluation Methods:- Depending
on the product and goals, select appropriate usability evaluation methods, such
as heuristic evaluation, user testing, or focus groups.
Step 4:-
Conduct the Evaluation:- Observe user interactions
and collect data as users complete tasks.
Use tools and techniques like:- User interviews, Usability
heuristics, Task analysis, Heuristic analysis
Step 5:- Analyze
Results:- Analyze the collected data to identify usability
problems, design flaws, and areas of improvement. Pay attention to user
feedback, behavior patterns, and task completion rates.
Step 6:- Implement
Changes:- Based on the evaluation results, make improvements to
the product. Address usability issues, refine the visual design, and optimize
the user experience design.
THE END UNIT 4 (PERFORMANCE AND SECURITY TESTING )
